Self-organizing network


Kohonen Feature Maps and Growing Cell Structures - a Performance Comparison

Neural Information Processing Systems

A performance comparison of two self-organizing networks, the Kohonen Feature Map and the recently proposed Growing Cell Structures is made. For this purpose several performance criteria for self-organizing networks are proposed and motivated. The models are tested with three example problems of increasing difficulty. The Kohonen Feature Map demonstrates slightly superior results only for the simplest problem. Additional advantages of the new model are that all parameters are constant over time and that size as well as structure of the network are determined automatically.
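The abstract mentions performance criteria for self-organizing networks without listing them. One criterion commonly used when comparing such models is the mean quantization error; the sketch below is an illustration of that general idea, not a reproduction of the paper's own criteria (the function name and array layout are assumptions).

```python
import numpy as np

def quantization_error(weights, data):
    """Mean distance from each input sample to its best matching unit.

    weights: (n_units, dim) array of unit weight vectors.
    data: (n_samples, dim) array of input samples.
    """
    # Distance from every sample to every unit, then the minimum per sample.
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

A lower value means the units cover the input distribution more densely; comparing the same criterion across the two models at equal network size gives one of the performance numbers a study like this would report.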


Grammar Learning by a Self-Organizing Network

Neural Information Processing Systems

This paper presents the design and simulation results of a self-organizing neural network which induces a grammar from example sentences. Input sentences are generated from a simple phrase structure grammar including number agreement, verb transitivity, and recursive noun phrase construction rules. The network induces a grammar explicitly in the form of symbol categorization rules and phrase structure rules. The purpose of this research is to show that a self-organizing network with a certain structure can acquire syntactic knowledge from only positive (i.e. grammatical) examples. There has been research on supervised neural network models of language acquisition tasks [Elman, 1991, Miikkulainen and Dyer, 1988, John and McClelland, 1988].


The Next Decade of Telecommunications Artificial Intelligence

Ouyang, Ye, Wang, Lilei, Yang, Aidong, Su, Le, Belanger, David, Gao, Tongqing, Wei, Leping, Zhang, Yaqin

arXiv.org Artificial Intelligence

It has been an exciting journey since mobile communications and artificial intelligence were conceived 37 and 64 years ago, respectively. While both fields evolved independently and profoundly changed the communications and computing industries, the rapid convergence of 5G and deep learning is beginning to significantly transform the core communication infrastructure, network management and vertical applications. The paper first outlines the individual roadmaps of mobile communications and artificial intelligence in the early stage, concentrating on the era from 3G to 5G when AI and mobile communications started to converge. With regard to telecommunications artificial intelligence, the paper further introduces in detail the progress of artificial intelligence in the ecosystem of mobile communications. The paper then summarizes the classifications of AI in telecom ecosystems along with the evolution paths specified by various international telecommunications standardization bodies. Towards the next decade, the paper forecasts the prospective roadmap of telecommunications artificial intelligence. In line with the 3GPP and ITU-R timelines of 5G & 6G, the paper further explores network intelligence following the 3GPP and ORAN routes respectively, experience- and intention-driven network management and operation, a network AI signalling system, intelligent middle-office based BSS, intelligent customer experience management and policy control driven by BSS and OSS convergence, the evolution from SLA to ELA, and intelligent private networks for verticals. The paper concludes with the vision that AI will reshape the future B5G or 6G landscape and that we need to pivot our R&D, standardizations, and ecosystem to fully seize the unprecedented opportunities.


The Yellow Brick Path to 5G: Why Self-Organizing, AI-Driven Networks Need a Little Extra Magic to Work with Existing Infrastructure

#artificialintelligence

The sheer number of services and the growing network complexity will require a step up in current network capabilities. Specifically, 5G networks will need to incorporate Artificial Intelligence (AI) and its offspring Machine Learning (ML). As AI/ML continue to gain steam and the rest of the business world gets on board, current networks lack the capabilities these technologies demand. To be honest, today's mostly manual, static networks are not suited for them. And while agile, self-organizing networks will exist in the future, service providers need to address their digital transformation efforts today, focusing on near-term solutions, to build the foundation for these networks of tomorrow.


A Growing Self-Organizing Network for Reconstructing Curves and Surfaces

Piastra, Marco

arXiv.org Artificial Intelligence

In the original Self-Organizing Map (SOM) algorithm by Teuvo Kohonen [1], a lattice of connected units learns a representation of an input data distribution. During the learning process, the weight vector - i.e. a position in the input space - associated with each unit is progressively adapted to the input distribution by finding the unit that best matches each input and moving it 'closer' to that input, together with a subset of neighboring units, to an extent that decreases with the distance on the lattice from the best matching unit. As the adaptation progresses, the SOM tends to represent the topology of the input data distribution, in the sense that it maps inputs that are 'close' in the input space to units that are neighbors in the lattice. In the Neural Gas (NG) algorithm [2], the topology of the network of units is not fixed, as it is with SOMs, but is learnt from the input distribution as part of the adaptation process. In particular, Martinetz and Schulten have shown in [3] that, under certain conditions, the Neural Gas algorithm tends to construct a restricted Delaunay graph, namely a triangulation with remarkable topological properties to be discussed later. They deem the structure constructed by the algorithm a topology representing network (TRN). Besides the thread of subsequent developments in the field of neural networks, the work by Martinetz and Schulten has also raised considerable interest in the community of computational topology and geometry. The studies that followed in this direction have produced a number of theoretical results that are nowadays at the foundations of some popular methods for curve and surface reconstruction in computer graphics [4], although they have little or nothing in common with neural network algorithms.
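The SOM adaptation step described above (find the best matching unit, then move it and its lattice neighbors toward the input, with influence and learning rate decaying over time) can be sketched as follows; the decay schedules and parameter values are illustrative assumptions, not those of any particular paper.

```python
import numpy as np

def som_step(weights, x, t, t_max, sigma0=2.0, eta0=0.5):
    """One SOM adaptation step on a 2-D lattice of units.

    weights: (rows, cols, dim) array -- one weight vector per lattice unit.
    x: one input sample of shape (dim,).
    t, t_max: current step and total number of steps (for decay).
    """
    rows, cols, _ = weights.shape
    # 1. Best matching unit (BMU): the unit whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # 2. Learning rate and neighborhood width decay over time (assumed schedule).
    frac = t / t_max
    eta = eta0 * (0.01 / eta0) ** frac
    sigma = sigma0 * (0.1 / sigma0) ** frac
    # 3. Move the BMU and its neighbors toward x; the influence falls off
    #    with squared distance on the lattice from the BMU.
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    lattice_d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-lattice_d2 / (2 * sigma ** 2))
    weights += eta * h[:, :, None] * (x - weights)
    return weights
```

Because each update is a convex combination of the old weight and the input, repeated steps pull the lattice into the region occupied by the data, which is the topology-preserving behavior the passage describes.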


Grammar Learning by a Self-Organizing Network

Negishi, Michiro

Neural Information Processing Systems

Michiro Negishi, Dept. of Cognitive and Neural Systems, Boston University, 111 Cummington Street, Boston, MA 02215 (negishi@cns.bu.edu). Abstract: This paper presents the design and simulation results of a self-organizing neural network which induces a grammar from example sentences. Input sentences are generated from a simple phrase structure grammar including number agreement, verb transitivity, and recursive noun phrase construction rules. The network induces a grammar explicitly in the form of symbol categorization rules and phrase structure rules. 1 Purpose and related works. The purpose of this research is to show that a self-organizing network with a certain structure can acquire syntactic knowledge from only positive (i.e. grammatical) examples. There has been research on supervised neural network models of language acquisition tasks [Elman, 1991, Miikkulainen and Dyer, 1988, John and McClelland, 1988]. Unlike these supervised models, the current model self-organizes word and phrasal categories and phrase construction rules through mere exposure to input sentences, without any artificially defined task goals.
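The kind of training data described above - sentences drawn from a simple phrase structure grammar with number agreement, verb transitivity, and recursive noun phrase construction - can be illustrated with a toy generator. The lexicon, rules, and probabilities below are illustrative assumptions, not the paper's actual grammar.

```python
import random

# Toy lexicon split by number (sg/pl) and verb transitivity (Vi/Vt).
LEXICON = {
    "N_sg": ["boy", "dog"], "N_pl": ["boys", "dogs"],
    "Vi_sg": ["walks"], "Vi_pl": ["walk"],
    "Vt_sg": ["chases"], "Vt_pl": ["chase"],
}

def np_(num, depth=0):
    """Noun phrase; recursion adds a relative-clause-like modifier."""
    head = random.choice(LEXICON[f"N_{num}"])
    det = "the" if num == "sg" else ""
    phrase = f"{det} {head}".strip()
    # Recursive NP construction, bounded so sentences stay finite.
    if depth < 1 and random.random() < 0.3:
        verb = random.choice(LEXICON[f"Vt_{num}"])  # agrees with the head noun
        phrase += f" that {verb} " + np_("sg", depth + 1)
    return phrase

def sentence():
    """One sentence; the verb agrees in number with the subject."""
    num = random.choice(["sg", "pl"])
    subj = np_(num)
    if random.random() < 0.5:  # intransitive verb: no object
        return f"{subj} {random.choice(LEXICON[f'Vi_{num}'])}"
    # Transitive verb takes an object NP of independent number.
    obj_num = random.choice(["sg", "pl"])
    return f"{subj} {random.choice(LEXICON[f'Vt_{num}'])} {np_(obj_num)}"
```

A learner exposed only to such positive examples has to recover both the word categories (singular vs. plural nouns, transitive vs. intransitive verbs) and the phrase construction rules, which is the task the paper sets for its self-organizing network.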


Kohonen Feature Maps and Growing Cell Structures - a Performance Comparison

Fritzke, Bernd

Neural Information Processing Systems

A performance comparison of two self-organizing networks, the Kohonen Feature Map and the recently proposed Growing Cell Structures is made. For this purpose several performance criteria for self-organizing networks are proposed and motivated. The models are tested with three example problems of increasing difficulty. The Kohonen Feature Map demonstrates slightly superior results only for the simplest problem.

